- Search Results
Learning good representations involves capturing the diverse ways in which data samples relate. Contrastive loss, an objective that matches related samples, underlies methods from self-supervised to multimodal learning. Contrastive losses, however, can be viewed more broadly as modifying a similarity graph to indicate how samples should relate in the embedding space. This view reveals a shortcoming in contrastive learning: the similarity graph is binary, since only one sample is treated as the related positive. Crucially, similarities *across* samples are ignored. Based on this observation, we revise the standard contrastive loss to explicitly encode how a sample relates to others. We experiment with this new objective, called X-Sample Contrastive, to train vision models based on similarities in class or text caption descriptions. Our study spans three scales: ImageNet-1k with 1 million samples, CC3M with 3 million, and CC12M with 12 million. The representations learned via our objective outperform both contrastive self-supervised and vision-language models trained on the same data across a range of tasks. When training on CC12M, we outperform CLIP on both ImageNet and ImageNet Real. Our objective appears to work particularly well in lower-data regimes, with gains over CLIP on ImageNet and ImageNet Real when training with CC3M. Finally, our objective seems to encourage the model to learn representations that separate objects from their attributes and backgrounds, with gains over CLIP on ImageNet-9. We hope the proposed solution takes a small step towards developing richer learning objectives for understanding sample relations in foundation models.

Free, publicly-accessible full text available April 24, 2026.
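The core idea in the abstract — replacing the binary similarity graph of standard contrastive learning with soft cross-sample targets — can be illustrated with a minimal sketch. This is not the paper's exact objective; the function names and the choice of a softmax over a precomputed sample-similarity matrix are illustrative assumptions. Standard InfoNCE uses an identity matrix as the target (each sample's only positive is its own pair); the revised loss instead uses a soft target distribution derived from how samples relate (e.g. class or caption similarity):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def standard_contrastive_loss(logits):
    # Standard contrastive (InfoNCE-style) loss: the target similarity
    # graph is binary -- the identity matrix -- so similarities across
    # different samples are ignored.
    n = logits.shape[0]
    targets = np.eye(n)
    log_probs = np.log(softmax(logits))
    return -(targets * log_probs).sum(axis=1).mean()

def x_sample_contrastive_loss(logits, sample_similarity):
    # Hypothetical sketch of the X-Sample idea: turn a cross-sample
    # similarity matrix (e.g. from class labels or caption embeddings)
    # into a soft target distribution, then take the cross-entropy
    # against the model's predicted similarities.
    targets = softmax(sample_similarity)
    log_probs = np.log(softmax(logits))
    return -(targets * log_probs).sum(axis=1).mean()
```

When the provided similarity matrix is strongly diagonal, the soft targets approach the identity and the revised loss recovers the standard one; off-diagonal similarity mass is what lets the objective encode how a sample relates to others.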
- Bertò, Giulia; Bullock, Daniel; Astolfi, Pietro; Hayashi, Soichi; Zigiotto, Luca; Annicchiarico, Luciano; Corsini, Francesco; De Benedictis, Alessandro; Sarubbo, Silvio; Pestilli, Franco; et al. (NeuroImage)
